Health literacy is a primary focus of Healthy People 2030, the fifth iteration of the United States' national health goals and objectives. People with low health literacy often have difficulty following post-visit instructions and using prescriptions correctly, which leads to poorer health outcomes and serious health disparities. In this study, we propose to leverage natural language processing techniques to improve the health literacy of patient education materials by automatically translating health-illiterate language in a given sentence. We scraped patient education materials from four online health information websites: medlineplus.gov, drugs.com, mayoclinic.org, and reddit.com. We trained and tested state-of-the-art neural machine translation (NMT) models on a silver-standard training dataset and a gold-standard test dataset, respectively. Experimental results show that the bidirectional long short-term memory (BiLSTM) NMT model outperformed the Bidirectional Encoder Representations from Transformers (BERT)-based NMT model. We also verified the effectiveness of the NMT models in translating health-illiterate language by comparing the ratio of health-illiterate language in the sentences. The proposed NMT models are able to identify the correct complex words and simplify them into layman's language, though the models suffer in sentence completeness, fluency, and readability, and have difficulty translating certain medical terms.
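As a rough illustration of the simplification task (not the paper's NMT model), a dictionary-based rewriter captures the target behavior: spot a complex medical term and substitute a lay equivalent. The lexicon and sentence below are hypothetical toy data; the paper's models learn such mappings from parallel data rather than from a fixed dictionary.

```python
import re

# Hypothetical toy lexicon mapping complex terms to lay-language equivalents.
LAY_LEXICON = {
    "hypertension": "high blood pressure",
    "myocardial infarction": "heart attack",
    "analgesic": "pain reliever",
}

def simplify(sentence: str) -> str:
    """Replace known complex terms with their lay-language equivalents."""
    for term, lay in LAY_LEXICON.items():
        sentence = re.sub(term, lay, sentence, flags=re.IGNORECASE)
    return sentence

print(simplify("Take the analgesic as directed to manage hypertension."))
```

Unlike a real NMT model, this sketch cannot restructure the sentence, which is exactly the fluency and completeness gap the abstract reports.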
Multiplayer Online Battle Arena (MOBA) is one of the most successful game genres. MOBA games such as League of Legends have competitive environments in which players compete for their rank. In most MOBA games, a player's rank is determined by the match result (win or lose). This seems natural given the team-play nature of the genre, but it is unfair in the sense that, in a losing match, players who put in a great deal of effort still lose rank. To reduce the side effects of team-based ranking systems and to evaluate player performance fairly, we propose a novel embedding model that converts a player's actions into quantitative scores based on each action's respective contribution to the team's victory. Our model is built on a sequence-based deep learning model with a novel loss function that operates on team matches. The sequence-based model processes the sequence of actions from the start of the game to the end of a team match using GRU units, which selectively blend the hidden state from the previous step with the current input. The loss function is designed to make the action scores reflect the team's final outcome and success. We show that our model can fairly evaluate each player's individual performance and analyze the contribution of the player's respective actions.
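The core mechanism, a GRU cell consuming a sequence of encoded actions while a linear head emits a per-action score, can be sketched as follows. The dimensions, random weights, and scoring head are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: gates selectively blend the previous hidden
    state with information from the current input."""
    def __init__(self, n_in, n_hid, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_hid)
        self.Wz, self.Uz = rng.uniform(-s, s, (n_hid, n_in)), rng.uniform(-s, s, (n_hid, n_hid))
        self.Wr, self.Ur = rng.uniform(-s, s, (n_hid, n_in)), rng.uniform(-s, s, (n_hid, n_hid))
        self.Wh, self.Uh = rng.uniform(-s, s, (n_hid, n_in)), rng.uniform(-s, s, (n_hid, n_hid))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)        # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)        # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_tilde            # gated blend

def score_actions(actions, cell, w_out):
    """Run the action sequence through the GRU, emitting one scalar
    score per action via a linear head (a hypothetical stand-in for
    the paper's learned scoring layer)."""
    h = np.zeros(cell.Uz.shape[0])
    scores = []
    for x in actions:
        h = cell.step(x, h)
        scores.append(float(w_out @ h))
    return scores

cell = GRUCell(n_in=4, n_hid=8)
actions = np.random.default_rng(1).normal(size=(5, 4))  # 5 encoded actions
w_out = np.ones(8) / 8.0
print(score_actions(actions, cell, w_out))
```

In the paper these scores are shaped by a loss tying them to the team's final match outcome; here the weights are random and only the data flow is shown.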
We propose a new causal inference framework to learn causal effects from multiple, decentralized data sources in a federated setting. We introduce an adaptive transfer algorithm that learns the similarities among the data sources by utilizing Random Fourier Features to disentangle the loss function into multiple components, each of which is associated with a data source. The data sources may have different distributions; the causal effects are independently and systematically incorporated. The proposed method estimates the similarities among the sources through transfer coefficients, and hence requires no prior information about the similarity measures. The heterogeneous causal effects can be estimated with no sharing of the raw training data among the sources, thus minimizing the risk of privacy leakage. We also provide minimax lower bounds to assess the quality of the parameters learned from the disparate sources. The proposed method is empirically shown to outperform the baselines on decentralized data sources with dissimilar distributions.
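Random Fourier Features, the building block used here to decompose the loss, approximate a shift-invariant kernel with an explicit finite-dimensional feature map. A minimal sketch for the RBF kernel (standard Rahimi-Recht construction, not the paper's full transfer algorithm):

```python
import numpy as np

def rff_features(X, n_features=2000, gamma=1.0, seed=0):
    """Random Fourier Features z(x) such that z(x) @ z(y) approximates
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # For exp(-gamma r^2), frequencies are drawn from N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Z = rff_features(X)
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
K_approx = Z @ Z.T
print(np.abs(K_exact - K_approx).max())  # approximation error shrinks as n_features grows
```

Because each source's data enters the loss only through such explicit features, the loss separates into per-source components, which is what makes the federated decomposition possible.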
Independent component analysis (ICA) is a blind source separation method to recover source signals of interest from their mixtures. Most existing ICA procedures assume independent sampling. Second-order-statistics-based source separation methods have been developed based on parametric time series models for the mixtures from the autocorrelated sources. However, the second-order-statistics-based methods cannot separate the sources accurately when the sources have temporal autocorrelations with mixed spectra. To address this issue, we propose a new ICA method by estimating spectral density functions and line spectra of the source signals using cubic splines and indicator functions, respectively. The mixed spectra and the mixing matrix are estimated by maximizing the Whittle likelihood function. We illustrate the performance of the proposed method through simulation experiments and an EEG data application. The numerical results indicate that our approach outperforms existing ICA methods, including SOBI algorithms. In addition, we investigate the asymptotic behavior of the proposed method.
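The Whittle likelihood central to the estimation can be illustrated on a simpler parametric case. The sketch below fits an AR(1) spectral density to simulated data by maximizing the Whittle log-likelihood over a grid; the paper's method instead parameterizes the spectra with cubic splines and indicator functions and jointly estimates the mixing matrix, which this toy example omits.

```python
import numpy as np

def whittle_loglik(x, spec_fn):
    """Whittle log-likelihood: -sum over Fourier frequencies of
    log f(w_k) + I(w_k) / f(w_k), where I is the periodogram."""
    n = len(x)
    freqs = np.fft.rfftfreq(n)[1:-1]                        # drop 0 and Nyquist
    I = (np.abs(np.fft.rfft(x)) ** 2 / (2.0 * np.pi * n))[1:-1]
    f = spec_fn(2.0 * np.pi * freqs)
    return -np.sum(np.log(f) + I / f)

def ar1_spectrum(phi, sigma2=1.0):
    """Spectral density of an AR(1) process x_t = phi * x_{t-1} + e_t."""
    return lambda w: sigma2 / (2.0 * np.pi * (1.0 - 2.0 * phi * np.cos(w) + phi ** 2))

# Simulate AR(1) with phi = 0.6 and recover phi by grid search.
rng = np.random.default_rng(0)
n, phi_true = 4000, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()
grid = np.linspace(-0.9, 0.9, 181)
phi_hat = grid[np.argmax([whittle_loglik(x, ar1_spectrum(p)) for p in grid])]
print(phi_hat)
```

Working in the frequency domain this way is what lets the proposed method handle sources whose autocorrelations produce mixed spectra.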
We propose a domain adaptation method, MoDA, which adapts a pretrained embodied agent to a new, noisy environment without ground-truth supervision. Map-based memory provides important contextual information for visual navigation, and exhibits unique spatial structure mainly composed of flat walls and rectangular obstacles. Our adaptation approach encourages the inherent regularities on the estimated maps to guide the agent to overcome the prevalent domain discrepancy in a novel environment. Specifically, we propose an efficient learning curriculum to handle the visual and dynamics corruptions in an online manner, self-supervised with pseudo clean maps generated by style transfer networks. Because the map-based representation provides spatial knowledge for the agent's policy, our formulation can deploy the pretrained policy networks from simulators in a new setting. We evaluate MoDA in various practical scenarios and show that our proposed method quickly enhances the agent's performance in downstream tasks including localization, mapping, exploration, and point-goal navigation.
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs having many computational and memory constraints. In this Mobile AI challenge, we address this problem and invite the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
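The INT8 requirement means every weight and activation must be mapped to 8-bit integers with an associated scale. A minimal sketch of symmetric per-tensor quantization, the simplest scheme edge NPUs accept (the challenge entries use full quantization-aware pipelines, which this does not show):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map floats into [-127, 127]
    with a single scale derived from the tensor's max magnitude."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3)).astype(np.float32)  # e.g. a conv kernel
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(np.abs(w - w_hat).max())  # round-trip error is at most scale / 2
```

Keeping this round-trip error from degrading PSNR, while meeting the NPU's latency budget, is the central design tension of the challenge.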
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking and many other mobile tasks. Thus, it is very crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single image depth estimation solutions that can show a real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located at up to 50 meters. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA resolution depth maps at up to 27 FPS while achieving high fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile device; their detailed description is provided in this paper.
Regret has been widely adopted as the metric of choice for evaluating the performance of online optimization algorithms in distributed multi-agent systems. However, data/model variations associated with the agents can significantly affect decisions and require consensus among the agents. Moreover, most existing works focus on developing methods for (strongly or non-strongly) convex losses, and few results have been obtained regarding regret bounds in distributed online optimization for general non-convex losses. To address these two issues, we propose a novel composite regret with a new network-based regret metric to evaluate distributed online optimization algorithms. We concretely define static and dynamic forms of the composite regret. By leveraging the dynamic form of our composite regret, we develop a consensus-based online normalized gradient descent (CONGD) approach for pseudo-convex losses, which is shown to exhibit sublinear behavior with respect to a regularity term related to the path variation of the optimizer. For general non-convex losses, we first shed light on the regret of distributed online non-convex learning based on recent advances, such that no deterministic algorithm can achieve sublinear regret. We then develop a distributed online non-convex optimization (DINOCO) approach based on an oracle for offline optimization, without access to gradients. DINOCO is shown to achieve sublinear regret. To our knowledge, this is the first regret bound for general distributed online non-convex learning.
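The regret accounting itself is simple to state in code. Below is a hedged single-agent sketch of static regret under a normalized-gradient online learner on a drifting quadratic loss stream; the paper's composite regret additionally folds in a network-based consensus term across agents, which is omitted here.

```python
import numpy as np

# Hypothetical stream of quadratic losses f_t(x) = ||x - c_t||^2 with drifting targets.
T = 200
targets = [np.array([1.0, -1.0]) + 0.01 * t for t in range(T)]
loss = lambda t, x: float(np.sum((x - targets[t]) ** 2))
grad = lambda t, x: 2.0 * (x - targets[t])

x = np.zeros(2)
iterates = []
for t in range(T):
    iterates.append(x.copy())
    g = grad(t, x)
    # Normalized gradient step with a decaying step size, in the spirit of CONGD.
    x = x - 0.1 / np.sqrt(t + 1) * g / (np.linalg.norm(g) + 1e-12)

# Static regret: cumulative loss of the iterates minus the loss of the best
# fixed comparator, which for a sum of quadratics is the mean of the targets.
u_star = np.mean(targets, axis=0)
regret = sum(loss(t, z) for t, z in enumerate(iterates)) - sum(loss(t, u_star) for t in range(T))
print(regret)
```

The dynamic form replaces the fixed comparator `u_star` with the per-round minimizers, which is where the path-variation regularity term in the paper's bound comes from.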
In the context of distributed deep learning, the issue of stale weights or gradients can lead to poor algorithmic performance. This issue is usually tackled by delay-tolerant algorithms, with some mild assumptions on the objective function and step sizes. In this paper, we propose a different approach and develop a new algorithm, called $\textbf{P}$redicting $\textbf{C}$lipping $\textbf{A}$synchronous $\textbf{S}$tochastic $\textbf{G}$radient $\textbf{D}$escent (aka, PC-ASGD). Specifically, PC-ASGD has two steps: the $\textit{predicting step}$ leverages gradient prediction using a Taylor expansion to reduce the staleness of the outdated weights, while the $\textit{clipping step}$ selectively drops the outdated weights to alleviate their negative effects. A trade-off parameter is introduced to balance the effects of these two steps. Theoretically, we present the convergence rate, considering the effects of delay, for smooth objective functions that are weakly strongly-convex or nonconvex. A practical variant of PC-ASGD is also proposed, which adopts a condition to help determine the trade-off parameter. For empirical validation, we demonstrate the performance of the algorithm with two deep neural network architectures on two benchmark datasets.
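The two steps can be sketched on a toy quadratic, where the first-order Taylor correction is exact. The threshold condition below is a hypothetical stand-in for the paper's trade-off mechanism, and a practical implementation would approximate the Hessian term with a Hessian-vector product rather than form the matrix.

```python
import numpy as np

# Toy quadratic loss f(w) = 0.5 * w' A w, whose gradient is A w.
A = np.diag([1.0, 4.0])
grad = lambda w: A @ w

w_stale = np.array([1.0, 1.0])   # delayed copy of the weights
w_now = np.array([0.8, 0.5])     # current weights

# Predicting step: correct the stale gradient with a first-order Taylor term,
# grad(w_now) ~= grad(w_stale) + H @ (w_now - w_stale). Exact here since the
# loss is quadratic with Hessian A.
g_pred = grad(w_stale) + A @ (w_now - w_stale)

# Clipping step: drop the stale contribution when the staleness is too large
# (a hypothetical condition illustrating the trade-off between the two steps).
staleness = np.linalg.norm(w_now - w_stale)
theta = 0.5
g_used = g_pred if staleness < theta else grad(w_now)

eta = 0.1
w_next = w_now - eta * g_used
print(w_next)
```

On a nonconvex deep-learning loss the Taylor prediction is only approximate, which is why the algorithm needs the clipping step and the trade-off parameter at all.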
Denoising magnetic resonance images is beneficial for improving the quality of low signal-to-noise ratio images. Recently, denoising using deep neural networks has demonstrated encouraging results. However, most of these networks utilize supervised learning, which requires a large number of noisy-and-clean image pairs as training images. Obtaining training images, particularly clean images, is expensive and time-consuming. Hence, methods such as Noise2Noise (N2N), which require only paired noisy images, have been developed to reduce the burden of obtaining a training dataset. In this study, we propose a new self-supervised denoising method, Coil2Coil (C2C), that does not require the acquisition of clean images or paired noisy images for training. Instead, the method utilizes multichannel data from phased-array coils to generate training images. First, it divides the multichannel coil images into two images, one for input and the other for the label. They are then processed to impose noise independence and sensitivity normalization so that they can be used as training images for N2N. For inference, the method takes a coil-combined image (e.g., a DICOM image) as input, allowing broad application of the method. When evaluated using synthetic noise-added images, C2C showed the best performance among several self-supervised methods, reporting results comparable to those of supervised methods. When tested on DICOM images, C2C successfully denoised real noise without showing structure-dependent residuals in the error maps. Owing to the significant advantage of not requiring additional scans for clean or paired images, the method can easily be utilized in various clinical applications.
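The pair-construction idea can be sketched directly: split the coil channels into two disjoint halves that see the same anatomy but carry independent noise. The sketch below assumes unit coil sensitivities and i.i.d. Gaussian noise; the actual C2C pipeline additionally applies sensitivity normalization and a noise-decorrelation step, which are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_coils, H, W = 8, 32, 32
clean = rng.normal(size=(H, W))                                # shared anatomy
coils = clean[None] + 0.1 * rng.normal(size=(n_coils, H, W))   # noisy coil images

# Split channels into two disjoint halves and coil-combine each half.
half_a, half_b = coils[0::2], coils[1::2]
img_a = half_a.mean(axis=0)   # N2N input
img_b = half_b.mean(axis=0)   # N2N label

# The two images share the underlying signal but have independent noise,
# so (img_a, img_b) can serve as a Noise2Noise training pair.
print(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])
```

Because the noise in the two halves is independent, a network trained to map `img_a` to `img_b` converges toward the clean signal, which is the N2N principle the method builds on.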